Gradient descent

Results: 277



91. Fast Prediction of New Feature Utility. Hoyt Koepke (Department of Statistics, University of Washington, Seattle, WA); Mikhail Bilenko (Microsoft Research, Redmond, WA)

Source URL: arxiv.org

Language: English - Date: 2012-06-21 20:00:35
92. Stability and optimality in stochastic gradient descent. Panos Toulis, Dustin Tran, and Edoardo M. Airoldi (Harvard University). arXiv:1505.02417v1 [stat.ME], 10 May 2015

Source URL: arxiv.org

Language: English - Date: 2015-05-11 20:26:25
93. Stochastic Gradient Descent with Importance Sampling. Rachel Ward (UT Austin); joint work with Deanna Needell (Claremont McKenna College)

Source URL: www.ma.utexas.edu

Language: English - Date: 2014-08-12 15:41:58
94. Gossip Learning with Linear Models on Fully Distributed Data. Róbert Ormándi, István Hegedűs, Márk Jelasity (University of Szeged and Hungarian Academy of Sciences), {ormandi,ihegedus,jelasity}@inf.u-szeged.hu

Source URL: www.inf.u-szeged.hu

Language: English - Date: 2013-02-15 04:43:56
95. Active Contours Methods with Respect to Vickers Indentations (© Springer Verlag; the copyright for this contribution is held by Springer Verlag; the original publication is available at www.springerlink.com)

Source URL: wavelab.at

Language: English - Date: 2014-12-16 01:40:33
96. Sparse Online Learning via Truncated Gradient. John Langford (Yahoo! Research)

Source URL: hunch.net

Language: English - Date: 2009-01-11 09:13:59
97. PCA-enhanced stochastic optimization methods. Alina Kuznetsova, Gerard Pons-Moll, and Bodo Rosenhahn (Institute for Information Processing (TNT), Leibniz University Hanover, Germany), {kuznetso,pons,rosenhahn}@tnt.uni-han

Source URL: www.tnt.uni-hannover.de

Language: English - Date: 2012-08-20 04:53:39
98. CS168: The Modern Algorithmic Toolbox, Lecture #15: Gradient Descent Basics. Tim Roughgarden & Gregory Valiant, May 18

Source URL: web.stanford.edu

Language: English - Date: 2015-05-27 17:30:42
99. One coordinate at a time. AdaBoost performs gradient descent on the exponential loss, adding one coordinate (a “weak learner”) at each iteration; weak learning in binary classification means predicting only slightly better than random. (A minimal code sketch of this view follows this entry.)

Source URL: seed.ucsd.edu

Language: English - Date: 2011-02-10 16:52:04
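
The bullets in item 99 summarize the standard view of AdaBoost as coordinate descent on the exponential loss: each round selects one weak learner (a coordinate) and takes a step along it. Below is a minimal sketch of that view, not taken from any of the listed papers; it assumes decision stumps as weak learners and a small synthetic dataset, and all names are illustrative.

# Illustrative sketch only: AdaBoost with decision stumps, viewed as
# coordinate descent on the exponential loss (one weak learner = one
# coordinate per round). Names and the toy data are assumptions.
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # A decision stump: predict +1/-1 by thresholding a single feature.
    return polarity * np.where(X[:, feature] >= threshold, 1.0, -1.0)

def best_stump(X, y, w):
    # Choose the stump with the smallest weighted error: this is the
    # "coordinate" (weak learner) selected in the current round.
    best, best_err = None, np.inf
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            for polarity in (1.0, -1.0):
                pred = stump_predict(X, feature, threshold, polarity)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best, best_err = (feature, threshold, polarity), err
    return best, best_err

def adaboost(X, y, rounds=20):
    # Each round takes one coordinate-descent step on the exponential loss:
    # pick a weak learner with weighted error below 1/2 (better than random),
    # give it weight alpha, and reweight the training examples.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        (f, t, p), err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)   # step size along this coordinate
        pred = stump_predict(X, f, t, p)
        w = w * np.exp(-alpha * y * pred)         # exponential-loss reweighting
        w = w / w.sum()
        ensemble.append((alpha, f, t, p))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.where(score >= 0, 1.0, -1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # toy labels
    model = adaboost(X, y)
    print("training accuracy:", np.mean(predict(model, X) == y))

The alpha update, 0.5 * log((1 - err) / err), is the exact line-search step size for the exponential loss along the chosen weak learner, which is what makes the "gradient descent on exponential loss" interpretation precise.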
100. Smoothed Gradients for Stochastic Variational Inference. David Blei (Department of Computer Science and Department of Statistics, Columbia University)

Source URL: papers.nips.cc

Language: English - Date: 2014-12-02 20:45:02